Winner-relaxing and winner-enhancing Kohonen maps: Maximal mutual information from enhancing the winner
Author
Abstract
The magnification behaviour of a generalized family of self-organizing feature maps, the Winner-Relaxing and Winner-Enhancing Kohonen algorithms, is analyzed via the magnification law, which can be obtained analytically in the one-dimensional case. The Winner-Enhancing case achieves a magnification exponent of one and therefore provides an optimal mapping in the sense of information theory. A numerical verification of the magnification law is included, and the ordering behaviour is analyzed. Compared to the original Self-Organizing Map and some other approaches, the generalized Winner-Enhancing algorithm requires minimal extra computation per learning step and is straightforward to implement.
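The abstract does not state the learning rule explicitly, so the following is only an illustrative sketch of how a winner-relaxing/-enhancing modification of the standard Kohonen update could look: the winner receives an additional term, scaled by a hypothetical factor `lam` (the sign of `lam` deciding between relaxing and enhancing), on top of the usual neighborhood-weighted update. The function name, the Gaussian neighborhood, and the exact form of the extra term are assumptions for illustration, not the paper's verbatim rule.

```python
import numpy as np

def wrk_update(weights, v, eps, lam, sigma):
    """One illustrative learning step of a winner-relaxing/-enhancing SOM.

    weights : (n, d) array of neuron weight vectors on a 1-D chain
    v       : (d,) input stimulus
    eps     : learning rate
    lam     : hypothetical winner-modification factor
              (lam > 0 relaxes the winner, lam < 0 enhances it)
    sigma   : width of the Gaussian neighborhood function
    """
    n = weights.shape[0]
    # winner = neuron closest to the stimulus
    winner = int(np.argmin(np.linalg.norm(weights - v, axis=1)))
    idx = np.arange(n)
    # Gaussian neighborhood on the 1-D neuron chain
    h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))
    # standard Kohonen term for all neurons
    delta = eps * h[:, None] * (v - weights)
    # assumed extra term: the summed pull on the non-winners,
    # subtracted from (lam > 0) or added to (lam < 0) the winner only
    others = np.delete(idx, winner)
    extra = eps * lam * np.sum(h[others, None] * (v - weights[others]), axis=0)
    delta[winner] -= extra
    return weights + delta
```

For `lam = 0` this reduces to the ordinary one-dimensional Kohonen update, which is consistent with the claim that the modification costs only a small amount of extra computation per learning step (one additional weighted sum).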
Similar papers
Magnification Laws of Winner-Relaxing and Winner-Enhancing Kohonen Feature Maps
Self-Organizing Maps are models for the unsupervised formation of cortical receptor-field representations by stimulus-driven self-organization in laterally coupled winner-take-all feedforward structures. This paper discusses modifications of the original Kohonen model, motivated by a potential function, with respect to their ability to set up a neural mapping of maximal mutual information. Enhancing the...
Generalized Winner-Relaxing Kohonen Self-Organizing Feature Maps
We calculate analytically the magnification behaviour of a generalized family of self-organizing feature maps inspired by a variant introduced by Kohonen in 1991, denoted here as the Winner-Relaxing Kohonen algorithm, which is shown here to have a magnification exponent of 4/7. Motivated by the observation that a modification of the learning rule for the winner neuron influences the magnification l...
Winner-Relaxing Self-Organizing Maps
A new family of self-organizing maps, the Winner-Relaxing Kohonen Algorithm, is introduced as a generalization of a variant given by Kohonen in 1991. The magnification behaviour is calculated analytically. For the original variant a magnification exponent of 4/7 is derived; the generalized version allows the magnification to be steered over the wide range from exponent 1/2 to 1 in the one-dimensional ...
A Fast Winner-Take-All Neural Network With the Dynamic Ratio
In this paper, we propose a fast winner-take-all (WTA) neural network. The fast winner-take-all neural network with the dynamic ratio in mutual-inhibition is developed from the general mean-based neural network (GEMNET), which adopts the mean of the active neurons as the threshold of mutual inhibition. Furthermore, the other winner-take-all neural network enhances the convergence speed to becom...
Towards an Information Density Measure for Neural Feature Maps
Many neural models have been suggested for the development of feature maps in cortical areas. Undoubtedly the most popular model is the Kohonen self-organizing map (SOM). Once the map has been learned, this network uses a competitive winner-take-all (WTA) approach to choose a single 'best' output neuron on a (typically) 2D grid for each presented input pattern. Cortical maps in biological organis...
Journal: Complexity
Volume: 8
Pages: -
Publication year: 2003